-
Optimizing compilers, such as LLVM, generate debug information in machine code to aid debugging. This information is particularly important when debugging optimized code, as modern software is often compiled with optimization enabled. However, properly updating debug information to reflect code transformations during optimization is a complex task that often relies on manual effort. This complexity makes the process prone to errors, which can lead to incorrect or lost debug information. Finding and fixing potential debug information update errors is vital to maintaining the accuracy and reliability of the overall debugging process. To our knowledge, no existing techniques can rectify debug information update errors in LLVM. While black-box testing approaches can find such bugs, they can neither pinpoint the root causes nor suggest fixes. To fill the gap, we propose the first technique to robustify debug information updates in LLVM. In particular, our robustification approach can find and fix incorrect debug location updates. Central to our approach is the observation that the debug locations in the original and optimized programs must satisfy a conformance relation. The relation ensures that LLVM optimizations do not introduce extraneous debug location information on the control-flow paths of the optimized programs. We introduce control-flow conformance analysis, a novel approach that determines the reference updates ensuring the conformance relation by observing the execution of LLVM optimization passes and analyzing the debug locations in the control-flow graphs of programs under optimization. The determined reference updates are then used to check developer-written updates in LLVM. When discrepancies arise, the reference updates serve as the update skeletons to guide the fixing. We realized our approach as a tool named MetaLoc, which determines proper debug location updates for LLVM optimizations. More importantly, with MetaLoc, we have reported and patched 46 previously unknown update errors in LLVM. All the patches, along with 22 new regression tests, have been merged into the LLVM codebase, effectively improving the accuracy and reliability of debug information in all programs optimized by LLVM. Furthermore, our approach uncovered and led to corrections in two issues within LLVM’s official documentation on debug information updates.
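To make the conformance relation concrete, here is a minimal sketch in Python, assuming hypothetical CFG and debug-location structures rather than MetaLoc's actual implementation, and simplified from a per-path check to a whole-graph check: an optimization must not introduce debug locations that were absent from the original program's control-flow paths.

```python
# Hypothetical sketch of the conformance relation, not MetaLoc's actual API.

def reachable_locations(cfg, entry):
    """Collect the (line, column) debug locations on every block reachable
    from `entry`. `cfg` maps a block name to (debug_locations, successors)."""
    seen, locations = set(), set()
    stack = [entry]
    while stack:
        block = stack.pop()
        if block in seen:
            continue
        seen.add(block)
        block_locations, successors = cfg[block]
        locations.update(block_locations)
        stack.extend(successors)
    return locations

def conforms(original, optimized, entry="entry"):
    """True if the optimization added no extraneous debug locations."""
    return reachable_locations(optimized, entry) <= reachable_locations(original, entry)

# Example: a pass merges two blocks but keeps only pre-existing locations.
original_cfg = {"entry": ([(3, 5)], ["body"]), "body": ([(4, 1), (6, 1)], [])}
optimized_cfg = {"entry": ([(3, 5), (6, 1)], [])}
print(conforms(original_cfg, optimized_cfg))  # True: nothing extraneous
```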
-
The increased parallelism in modern processors has sparked interest in offloading security policy enforcement to processes or hardware operating in parallel with the main application. This approach can reduce application latency, enhance security, and improve compatibility. However, existing software solutions often incur high overheads and are susceptible to memory corruption attacks, while hardware solutions tend to be inflexible and require substantial modifications to the processor. In this paper, we present SIDECAR, a novel approach that offloads security checks to run concurrently with applications by leveraging the debugging infrastructure available in commodity processors. Specifically, we utilize software-driven logging (SDL) extensions in Intel and Arm processors to create secure, append-only channels between applications and security monitors. We build and evaluate a prototype of SIDECAR for the x86-64 and AArch64 architectures. To demonstrate its utility, we adapt well-known security defenses within SIDECAR, providing control-flow integrity (CFI), shadow call stacks (SCS), and memory error checking (ASAN). Our evaluation shows that these extensions perform better on the Intel architecture. In terms of defenses, SIDECAR reduces the latency of CFI in the tested real-world applications by an average of 30%, offers enhanced security with similar overhead for SCS, and is versatile enough to support complex defenses like ASAN. Furthermore, our security monitor for CFI+SCS is 30 times more efficient compared to previous work.
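As a rough model of the monitoring scheme, the following pure-Python sketch (an in-process stand-in for the hardware SDL channel, with hypothetical event names) shows a monitor consuming an append-only call/return stream to enforce a shadow call stack.

```python
# Stand-in for SIDECAR's hardware SDL channel: the application appends
# call/return events; the monitor replays them against a shadow stack.

from queue import SimpleQueue

channel = SimpleQueue()  # models the secure, append-only log

def log_call(return_address):
    channel.put(("call", return_address))   # application-side instrumentation

def log_return(return_address):
    channel.put(("ret", return_address))

def monitor(event_count):
    """Replay logged events against a shadow stack; flag any mismatch."""
    shadow = []
    for _ in range(event_count):
        kind, address = channel.get()
        if kind == "call":
            shadow.append(address)
        elif not shadow or shadow.pop() != address:
            raise RuntimeError(f"shadow-stack violation at {address:#x}")

log_call(0x401000)
log_call(0x401050)
log_return(0x401050)
log_return(0x401000)
monitor(4)  # completes silently: every return matched the shadow stack
```

The append-only discipline is what decouples security from the application: even a compromised application can only add events to the log, never rewrite the monitor's view of past control flow.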
-
Marine bivalves are important components of ecosystems and exploited by humans for food across the world, but the intrinsic vulnerability of exploited bivalve species to global changes is poorly known. Here, we expand the list of shallow-marine bivalves known to be exploited worldwide, with 720 exploited bivalve species added beyond the 81 in the United Nations FAO Production Database, and investigate their diversity, distribution and extinction vulnerability using a metric based on ecological traits and evolutionary history. The added species shift the richness hotspot of exploited species from the northeast Atlantic to the west Pacific, with 55% of bivalve families being exploited, concentrated mostly in two major clades but all major body plans. We find that exploited species tend to be larger in size, occur in shallower waters, and have larger geographic and thermal ranges; the last two traits are known to confer extinction resistance in marine bivalves. However, exploited bivalve species in certain regions, such as the tropical east Atlantic and the temperate northeast and southeast Pacific, are among those with high intrinsic vulnerability and are a large fraction of regional faunal diversity. Our results pinpoint regional faunas and specific taxa of likely concern for management and conservation.
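For illustration only, the sketch below is a hypothetical scoring scheme, not the paper's actual metric, which ranks species by the two range traits the abstract links to extinction resistance; narrower geographic and thermal ranges score as more intrinsically vulnerable. All species names and trait values are invented.

```python
# Purely hypothetical trait-based vulnerability ranking; not from the paper.

def vulnerability_scores(species):
    """Mean inverted percentile rank of geographic and thermal range."""
    def inverted_percentile(values, v):
        ranked = sorted(values)
        return 1 - ranked.index(v) / (len(ranked) - 1)

    geo = [s["geo_range_km"] for s in species]
    thermal = [s["thermal_range_c"] for s in species]
    return {
        s["name"]: (inverted_percentile(geo, s["geo_range_km"])
                    + inverted_percentile(thermal, s["thermal_range_c"])) / 2
        for s in species
    }

fauna = [
    {"name": "wide-ranging clam", "geo_range_km": 9000, "thermal_range_c": 25},
    {"name": "regional cockle",   "geo_range_km": 2000, "thermal_range_c": 12},
    {"name": "endemic oyster",    "geo_range_km":  300, "thermal_range_c":  5},
]
print(vulnerability_scores(fauna))  # endemic oyster scores 1.0 (most vulnerable)
```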
-
We present a psychometric evaluation of the Cybersecurity Curriculum Assessment (CCA), completed by 193 students from seven colleges and universities. The CCA builds on our prior work developing and validating a Cybersecurity Concept Inventory (CCI), which measures students' conceptual understanding of cybersecurity after a first course in the area. The CCA deepens the conceptual complexity and technical depth expectations, assessing conceptual knowledge of students who had completed multiple courses in cybersecurity. We review our development of the CCA and present our evaluation of the instrument using Classical Test Theory and Item-Response Theory. The CCA is a difficult assessment, providing reliable measurements of student knowledge and deeper information about high-performing students.
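For reference, here is a minimal sketch of the two-parameter logistic (2PL) model from Item-Response Theory; the parameter values are purely illustrative, not estimates from the CCA data.

```python
# 2PL item-response model sketch; illustrative parameters, not CCA results.

import math

def p_correct(theta, a, b):
    """Probability that an examinee of ability `theta` answers an item
    correctly, given item discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A hard, discriminating item: a student one standard deviation above the
# mean ability (theta = 1.0) has only a 50% chance when difficulty b = 1.0.
print(p_correct(theta=1.0, a=1.5, b=1.0))  # 0.5
```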
